
new experience


1 Details about the observation formats

Figure 1: Example of the observation of WebShop

The observation of WebShop is simplified based on the text_rich

Neural Information Processing Systems

The observation of WikiHow is represented in exactly the same way as in Zhang et al. [2023].

Table 1: Patterns of WebShop pages

Pattern      Description
search       The page to search for an item
itemlisting  The page listing the search results
item         The information page of a specific item
others       The item description page, item feature page, and review page

The similarity lookup table is defined in Table 2.

Table 2: Lookup table of the page similarity of WebShop

             search  itemlisting  item  others
search       1       0            0     0
itemlisting  0       1            0     0
item         0       0            1     0.3
others       0       0            0.3   1

2.2 Lookup table of the instruction similarity function of WikiHow

The patterns of WikiHow instructions are listed in Table 3.

Table 3: Patterns of WikiHow instructions

Pattern Name  Pattern Template
search        Search an article to learn . . .

Owing to the limited budget, a subset of only 20 tasks is sampled from the full test set. The visualization is available in Figure 2. It can be seen that the performance of R However, there seems to be saturation in the performance, which may be attributed to the limited number of active exemplars and training tasks. The saturation of the average reward comes later than that of the success rate. Double Q-Learning [van Hasselt, 2010] is usually leveraged to ameliorate over-estimation in lookup-based Q-Learning.
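The tabular Double Q-Learning update mentioned above can be sketched as follows. This is a minimal illustration of van Hasselt's scheme, not the excerpt's actual implementation; the state/action representation, learning rate, and discount factor are assumptions for illustration.

```python
import random
from collections import defaultdict


def double_q_update(QA, QB, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One tabular Double Q-Learning step (van Hasselt, 2010).

    Two lookup tables QA and QB are maintained.  On each step a coin
    flip decides which table is updated; the greedy action at s_next
    is selected with the updated table but evaluated with the other
    one, which reduces the over-estimation bias of single-table
    Q-Learning (max of noisy estimates is biased upward).
    """
    if random.random() < 0.5:
        # Select with QA, evaluate with QB, update QA.
        a_star = max(actions, key=lambda a2: QA[(s_next, a2)])
        QA[(s, a)] += alpha * (r + gamma * QB[(s_next, a_star)] - QA[(s, a)])
    else:
        # Select with QB, evaluate with QA, update QB.
        b_star = max(actions, key=lambda a2: QB[(s_next, a2)])
        QB[(s, a)] += alpha * (r + gamma * QA[(s_next, b_star)] - QB[(s, a)])
```

With both tables initialized to zero, a single update with reward r moves one table's entry to alpha * r while the other is untouched.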



HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models

Neural Information Processing Systems

In order to thrive in hostile and ever-changing natural environments, mammalian brains evolved to store large amounts of knowledge about the world and continually integrate new information while avoiding catastrophic forgetting. Despite their impressive accomplishments, large language models (LLMs), even with retrieval-augmented generation (RAG), still struggle to efficiently and effectively integrate large amounts of new experiences after pre-training. In this work, we introduce HippoRAG, a novel retrieval framework inspired by the hippocampal indexing theory of human long-term memory, to enable deeper and more efficient knowledge integration over new experiences. HippoRAG synergistically orchestrates LLMs, knowledge graphs, and the Personalized PageRank algorithm to mimic the different roles of the neocortex and hippocampus in human memory. We compare HippoRAG with existing RAG methods on multi-hop question answering (QA) and show that our method outperforms state-of-the-art methods by up to 20%. Single-step retrieval with HippoRAG achieves comparable or better performance than iterative retrieval methods like IRCoT while being 10-20 times cheaper and 6-13 times faster, and integrating HippoRAG into IRCoT brings further substantial gains. Finally, we show that our method can tackle new types of scenarios that are out of reach of existing methods.
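The Personalized PageRank step the abstract refers to can be illustrated with a minimal power-iteration sketch over a toy graph. The graph, seed nodes, damping value, and iteration count here are assumptions for illustration only; HippoRAG builds its knowledge graph with an LLM and runs PPR over that graph.

```python
def personalized_pagerank(graph, seeds, damping=0.5, iters=50):
    """Power-iteration sketch of Personalized PageRank.

    graph: dict mapping each node to a non-empty list of out-neighbors.
    seeds: query-derived nodes; the teleport mass is restricted to
    them, so the stationary distribution concentrates near the query
    entities rather than spreading over the whole graph.
    """
    nodes = list(graph)
    # Personalization vector: uniform over seeds, zero elsewhere.
    p = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(p)
    for _ in range(iters):
        rank = {
            n: (1 - damping) * p[n]
               + damping * sum(rank[m] / len(graph[m])
                               for m in nodes if n in graph[m])
            for n in nodes
        }
    return rank
```

On a three-node cycle with a single seed node, the scores sum to 1 and the seed retains the largest share of the probability mass, which is the behavior that biases retrieval toward query-relevant passages.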


Gemini is coming to Google's smart speakers and displays this fall

PCWorld

We already knew that Google Assistant was soon to be replaced by a "new experience powered by Gemini." Now we know what that new experience will be called, and when it's arriving. Gemini for Home is the name of the new Gemini-powered voice assistant for Google smart speakers and displays, including the current Nest Audio, Nest Mini, Nest Hub, and Nest Hub Max. "Early access" to Gemini for Home kicks off in October, Google announced in a blog post Wednesday, with both free and paid versions available. Google didn't say how much the paid version of Gemini for Home will cost.



Google Assistant's been having a rough few weeks. Here's Google's response

PCWorld

Nope, it's not just you: Reports of Google Assistant struggling to perform even basic smart home commands have been surging in recent weeks, and now Google is admitting that something's amiss. The lead executive for Google's Home and Nest division tweeted on X that he's heard the complaints "loud and clear" and revealed that his team is "actively working on major improvements." "I want to acknowledge the recent feedback about Google Assistant reliability on our home devices," said Anish Kattukaran, the director of product management for Google Home and Nest. "I sincerely apologize for what you're experiencing and feeling!" Kattukaran's assurances come after a steep rise in complaints about Google Assistant on Google's Nest speakers and displays. Some users have been reporting that their Assistant routines have stopped working, while others say their Assistant-enabled devices have lost contact with smart lights, fail to play Spotify playlists, or can no longer control their Chromecast streaming devices with voice commands.




Smart home got the cold shoulder at Google's I/O keynote

PCWorld

From game-changing text diffusion models and cutting-edge AR glasses to AI videos with sound and virtual clothing try-ons, there was plenty of amazing tech to see during Google's I/O keynote on Tuesday. The closest we got to a smart home shout-out was when a Google exec said that Gemini--the star of the show--is "coming to your watch, your car dashboard, even your TV." As Google puts its Google TV Streamer under the umbrella of smart home, we'll count that as a fleeting reference. Officially, Google has promised that Gemini is coming to Nest devices. Gemini on Nest speakers has been available on a public-preview basis for months now, and back in March, Google confirmed that a "new experience powered by Gemini" is coming to smart speakers and displays.


Can Generative Agents Predict Emotion?

Regan, Ciaran, Iwahashi, Nanami, Tanaka, Shogo, Oka, Mizuki

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have demonstrated a number of human-like abilities; however, the empathic understanding and emotional state of LLMs are yet to be aligned with those of humans. In this work, we investigate how the emotional state of generative LLM agents evolves as they perceive new events, introducing a novel architecture in which new experiences are compared to past memories. Through this comparison, the agent gains the ability to understand new experiences in context, which according to the appraisal theory of emotion is vital in emotion creation. First, the agent perceives new experiences as time-series text data. After perceiving each new input, the agent generates a summary of past relevant memories, referred to as the norm, and compares the new experience to this norm. Through this comparison we can analyse how the agent reacts to the new experience in context. The PANAS, a test of affect, is administered to the agent, capturing the emotional state of the agent after the perception of the new event. Finally, the new experience is added to the agent's memory to be used in the creation of future norms. By creating multiple experiences in natural language from emotionally charged situations, we test the proposed architecture on a wide range of scenarios. The mixed results suggest that introducing context can occasionally improve the emotional alignment of the agent, but further study and comparison with human evaluators is necessary. We hope that this paper is another step towards the alignment of generative agents.
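The perception loop described in the abstract (summarize past memories into a norm, appraise the new event against it, then store the event) can be sketched as below. The `summarize` and `appraise` callables stand in for LLM calls; their names and signatures are assumptions for illustration, not the paper's actual API.

```python
from typing import Callable, List


def perceive_event(memory: List[str], event: str,
                   summarize: Callable[[List[str]], str],
                   appraise: Callable[[str, str], str]) -> str:
    """One perception step of the norm-comparison architecture.

    1. Build the 'norm': a summary of past relevant memories.
    2. Appraise the new event against that norm (context-aware
       reaction, per appraisal theory).
    3. Store the event so it informs future norms.
    """
    norm = summarize(memory) if memory else ""
    reaction = appraise(event, norm)  # appraisal relative to the norm
    memory.append(event)              # experience feeds future norms
    return reaction
```

With toy stand-ins for the two LLM calls, the second event is appraised against a norm built from the first, showing how context accumulates across perceptions.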


AI-driven platform Play Anywhere launches game-changing partnership to reimagine interactive TV sports rights

FOX News

Fox News Flash top sports headlines are here. Check out what's clicking on Foxnews.com. As artificial intelligence continues to completely change the way millions of fans interact with live sporting events, a platform is introducing an innovative approach to monetization. Technology company Play Anywhere has developed a proven track record of increasing fan engagement and creating new revenue streams for its partners. The technology can be seamlessly integrated into mobile devices, connected televisions, or various streaming devices.